25 research outputs found
Assessing, testing, and challenging the computational power of quantum devices
Randomness is an intrinsic feature of quantum theory. The outcome of any measurement will be random, sampled from a probability distribution that is defined by the measured quantum state. The task of sampling from a prescribed probability distribution therefore seems to be a natural technological application of quantum devices. And indeed, certain random sampling tasks have been proposed to experimentally demonstrate the speedup of quantum over classical computation, so-called “quantum computational supremacy”.
In the research presented in this thesis, I investigate the complexity-theoretic and physical foundations of quantum sampling algorithms. Using the theory of computational complexity, I assess the computational power of natural quantum simulators and close loopholes in the complexity-theoretic argument for the classical intractability of quantum samplers (Part I). In particular, I prove anticoncentration for quantum circuit families that give rise to a 2-design and review methods for proving average-case hardness. I present quantum random sampling schemes that are tailored to large-scale quantum simulation hardware and at the same time meet the highest standard of complexity-theoretic underpinning. Using methods from property testing and quantum system identification, I shed light on the question of how and under which conditions quantum sampling devices can be tested or verified in regimes that are not simulable on classical computers (Part II). I present a no-go result that rules out efficient verification of quantum random sampling schemes, as well as approaches by which this no-go result can be circumvented. In particular, I develop fully efficient verification protocols in what I call the measurement-device-dependent scenario, in which single-qubit measurements are assumed to function with high accuracy. Finally, I aim to understand the physical mechanisms governing the computational boundary between classical and quantum computing devices by challenging their computational power with tools from computational physics and the theory of computational complexity (Part III). I develop efficiently computable measures of the infamous Monte Carlo sign problem and assess them both in terms of their practicability as tools for easing the sign problem and in terms of the computational complexity of that task.
An overarching theme of the thesis is the quantum sign problem, which arises from destructive interference between paths – an intrinsically quantum effect. The (non-)existence of a sign problem takes on the role of a criterion delineating the boundary between classical and quantum computing devices. I begin the thesis by identifying the quantum sign problem as a root of the computational intractability of quantum output probabilities. It turns out that the intricate structure of the probability distributions to which the sign problem gives rise prohibits their verification from few samples. In an ironic twist, I show that assessing the intrinsic sign problem of a quantum system is itself an intractable problem.
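To make the notion of a sign problem concrete, here is a minimal numerical sketch (not code from the thesis; the Hamiltonians and the inverse temperature are illustrative). It computes the "average sign" Z(H)/Z(H~) of a small Hamiltonian, where H~ is the sign-free counterpart obtained by replacing every off-diagonal entry with minus its absolute value; an average sign of 1 means no sign problem, while values far below 1 signal a severe one.

```python
import numpy as np

def average_sign(H, beta=2.0):
    """Average sign <s> = Z(H) / Z(H~), where H~ is the stoquasticized
    Hamiltonian with off-diagonal entries replaced by -|H_ij|."""
    H_tilde = -np.abs(H)
    np.fill_diagonal(H_tilde, np.diag(H))
    Z = np.sum(np.exp(-beta * np.linalg.eigvalsh(H)))
    Z_tilde = np.sum(np.exp(-beta * np.linalg.eigvalsh(H_tilde)))
    return Z / Z_tilde

# A stoquastic Hamiltonian (all off-diagonal entries <= 0): no sign problem.
H_stoq = np.array([[0., -1., -1.],
                   [-1., 0., -1.],
                   [-1., -1., 0.]])

# Flipping one hopping term positive frustrates the triangle and
# introduces destructive interference between paths.
H_frustrated = H_stoq.copy()
H_frustrated[0, 1] = H_frustrated[1, 0] = 1.0

print(average_sign(H_stoq))        # exactly 1: H~ coincides with H
print(average_sign(H_frustrated))  # strictly below 1
```

For the frustrated triangle the spectra can be found by hand (eigenvalues 2, -1, -1 versus -2, 1, 1 for the sign-free counterpart), so the drop of the average sign below 1 is easy to verify analytically.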
Anticoncentration theorems for schemes showing a quantum speedup
One of the main milestones in quantum information science is to realise quantum devices that exhibit an exponential computational advantage over classical ones without being universal quantum computers, a state of affairs dubbed quantum speedup, or sometimes "quantum computational supremacy". The known schemes heavily rely on mathematical assumptions that are plausible but unproven, prominently results on anticoncentration of random prescriptions. In this work, we aim to close this gap by proving two anticoncentration theorems and accompanying hardness results, one for circuit-based schemes, the other for quantum quench-type schemes for quantum simulations. Compared to the few other known results of this kind, ours give rise to a number of comparably simple, physically meaningful and resource-economical schemes showing a quantum speedup in one and two spatial dimensions. At the heart of the analysis are tools from unitary designs and random circuits that allow us to conclude that universal random circuits anticoncentrate, as well as an embedding of known circuit-based schemes in a 2D translation-invariant architecture.
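As a numerical illustration of what anticoncentration means (a generic Haar-random toy model, not the specific schemes analysed in this work): for Haar-random states, a constant fraction of output probabilities exceeds the uniform value 1/N, and that fraction approaches 1/e as the dimension grows.

```python
import numpy as np

rng = np.random.default_rng(0)
n_qubits, n_samples = 6, 20000
N = 2 ** n_qubits

# Haar-random pure states: normalized complex Gaussian vectors
z = rng.normal(size=(n_samples, N)) + 1j * rng.normal(size=(n_samples, N))
psi = z / np.linalg.norm(z, axis=1, keepdims=True)

# Output probability of the all-zeros outcome for each random state
p0 = np.abs(psi[:, 0]) ** 2

# Anticoncentration: a constant fraction of probabilities exceeds 1/N;
# for Haar-random states this fraction approaches 1/e ~ 0.37
frac = np.mean(N * p0 > 1.0)
print(frac)
```

The estimate concentrates tightly around (1 - 1/N)^(N-1), the exact value for this ensemble, which is close to 1/e already at N = 64.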
Bell sampling from quantum circuits
A central challenge in the verification of quantum computers is benchmarking their performance as a whole and demonstrating their computational capabilities. In this work, we find a model of quantum computation, Bell sampling, that can be used for both of those tasks and thus provides an ideal stepping stone towards fault tolerance. In Bell sampling, we measure two copies of a state prepared by a quantum circuit in the transversal Bell basis. We show that the Bell samples are classically intractable to produce and at the same time constitute what we call a circuit shadow: from the Bell samples we can efficiently extract information about the quantum circuit preparing the state, as well as diagnose circuit errors. In addition to known properties that can be efficiently extracted from Bell samples, we give two new and efficient protocols, a test for the depth of the circuit and an algorithm to estimate a lower bound on the number of T gates in the circuit. With some additional measurements, our algorithm learns a full description of states prepared by circuits with low T-count.
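A minimal single-qubit sketch of the measurement primitive (illustrative only, not the protocols of the paper): measuring two copies of a pure state in the Bell basis. Because two identical copies form a state symmetric under exchange, the antisymmetric singlet outcome never occurs.

```python
import numpy as np

# Bell basis for a pair of qubits (one qubit from each copy)
bell = np.array([
    [1, 0, 0, 1],    # |Phi+>
    [0, 1, 1, 0],    # |Psi+>
    [1, 0, 0, -1],   # |Phi->
    [0, 1, -1, 0],   # |Psi->, the antisymmetric singlet
]) / np.sqrt(2)

def bell_probs(psi):
    """Outcome distribution of a Bell measurement on two copies of psi."""
    two_copies = np.kron(psi, psi)
    return np.abs(bell @ two_copies) ** 2

# An arbitrary single-qubit pure state (parameters chosen arbitrarily)
psi = np.array([np.cos(0.3), np.sin(0.3) * np.exp(0.7j)])
p = bell_probs(psi)
print(p)  # the singlet entry p[3] vanishes identically
```

For n-qubit circuits the same measurement is applied transversally, qubit pair by qubit pair; the paper's protocols analyse the statistics of the resulting bit strings.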
Analogue Quantum Simulation: A Philosophical Prospectus
This paper provides the first systematic philosophical analysis of an increasingly important part of modern scientific practice: analogue quantum simulation. We introduce the distinction between 'simulation' and 'emulation' as applied in the context of two case studies. Based upon this distinction, and building upon ideas from the recent philosophical literature on scientific understanding, we provide a normative framework to isolate and support the goals of scientists undertaking analogue quantum simulation and emulation. We expect our framework to be useful to both working scientists and philosophers of science interested in cutting-edge scientific practice.
The power of fixing a few qubits in proofs
What could happen if we pinned a single qubit of a system and fixed it in a particular state? First, we show that this leads to difficult static questions about the ground-state properties of local Hamiltonian problems with restricted types of terms. In particular, we show that the pinned commuting and pinned stoquastic Local Hamiltonian problems are quantum-Merlin-Arthur–complete. Second, we investigate pinned dynamics and demonstrate that fixing a single qubit via often-repeated measurements results in universal quantum computation with commuting Hamiltonians. Finally, we discuss variants of the ground-state connectivity (GSCON) problem in light of pinning, and show that stoquastic GSCON is quantum-classical Merlin-Arthur–complete.
Semi-device-dependent blind quantum tomography
Extracting tomographic information about quantum states is a crucial task in the quest towards devising high-precision quantum devices. Current schemes typically require measurement devices for tomography that are a priori calibrated to high precision. Ironically, the accuracy of the measurement calibration is fundamentally limited by the accuracy of state preparation, establishing a vicious cycle. Here, we prove that this cycle can be broken and the fundamental dependence on the measurement devices significantly relaxed. We show that exploiting the natural low-rank structure of quantum states of interest suffices to arrive at a highly scalable blind tomography scheme with a classically efficient post-processing algorithm. We further improve the efficiency of our scheme by making use of the sparse structure of the calibrations. This is achieved by relaxing the blind quantum tomography problem to the task of de-mixing a sparse sum of low-rank quantum states. Building on techniques from model-based compressed sensing, we prove that the proposed algorithm recovers a low-rank quantum state and the calibration provided that the measurement model exhibits a restricted isometry property. For generic measurements, we show that our algorithm requires a close-to-optimal number of measurement settings for solving the blind tomography task. Complementing these conceptual and mathematical insights, we numerically demonstrate that blind quantum tomography is possible by exploiting low-rank assumptions in a practical setting inspired by a trapped-ion implementation, using constrained alternating optimization.
Page curves and typical entanglement in linear optics
Bosonic Gaussian states are a special class of quantum states in an infinite-dimensional Hilbert space that are relevant to universal continuous-variable quantum computation as well as to near-term quantum sampling tasks such as Gaussian Boson Sampling. In this work, we study entanglement within a set of squeezed modes that have been evolved by a random linear optical unitary. We first derive formulas that are asymptotically exact in the number of modes for the Rényi-2 Page curve (the average Rényi-2 entropy of a subsystem of a pure bosonic Gaussian state) and the corresponding Page correction (the average information of the subsystem) in certain squeezing regimes. We then prove various results on the typicality of entanglement as measured by the Rényi-2 entropy by studying its variance. Using the aforementioned results for the Rényi-2 entropy, we upper and lower bound the von Neumann entropy Page curve and prove certain regimes of entanglement typicality as measured by the von Neumann entropy. Our main proofs make use of a symmetry property obeyed by the average and the variance of the entropy that dramatically simplifies the averaging over unitaries. In this light, we propose future research directions where this symmetry might also be exploited. We conclude by discussing potential applications of our results and their generalizations to Gaussian Boson Sampling and to illuminating the relationship between entanglement and computational complexity.
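For intuition, the qubit analogue of a Page curve can be checked numerically (a standard finite-dimensional toy calculation, not the bosonic Gaussian setting of the paper): the average von Neumann entropy of a small subsystem of a Haar-random pure state sits close to Page's asymptotic value ln(d_A) - d_A/(2 d_B).

```python
import numpy as np

rng = np.random.default_rng(0)
n_A, n_B = 2, 6                 # qubits in subsystem A and its complement B
d_A, d_B = 2 ** n_A, 2 ** n_B

def subsystem_entropy(psi):
    """Von Neumann entropy of subsystem A for a pure state psi on A x B."""
    s = np.linalg.svd(psi.reshape(d_A, d_B), compute_uv=False)
    p = s ** 2                   # Schmidt coefficients = reduced-state spectrum
    p = p[p > 1e-15]
    return float(-np.sum(p * np.log(p)))

entropies = []
for _ in range(200):
    z = rng.normal(size=d_A * d_B) + 1j * rng.normal(size=d_A * d_B)
    entropies.append(subsystem_entropy(z / np.linalg.norm(z)))

avg = np.mean(entropies)
page = np.log(d_A) - d_A / (2 * d_B)  # Page's average-entropy formula
print(avg, page)
```

The tight agreement between the sample mean and the formula reflects exactly the typicality phenomenon studied in the paper: the entropy of a random subsystem concentrates sharply around its average.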